Context Models For Web Search Personalization
We present our solution to the Yandex Personalized Web Search Challenge. The
aim of this challenge was to use historical search logs to personalize
top-N document rankings for a set of test users. We used over 100 features
extracted from user- and query-dependent contexts to train neural-net and
tree-based learning-to-rank and regression models. Our final submission, which
was a blend of several different models, achieved an NDCG@10 of 0.80476 and
placed 4th among the 194 teams, winning 3rd prize.
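The abstract reports performance as NDCG@10, the standard graded-relevance ranking metric. As a minimal sketch of how it is computed (relevance labels and k are illustrative, not taken from the challenge data):

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k relevance labels, in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the given ranking divided by the DCG of the ideal ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical relevance labels of the top documents returned by a ranker
# (0 = irrelevant, 2 = highly relevant).
print(round(ndcg_at_k([2, 0, 1, 2, 0], k=10), 4))  # → 0.8935
```

A perfect ordering scores exactly 1.0, which is why challenge leaderboards can compare rankers across queries with different numbers of relevant documents.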
Meta-learning on heterogeneous information networks for cold-start recommendation
National Research Foundation (NRF) Singapore under its AI Singapore Programme
ProxyFL: Decentralized Federated Learning through Proxy Model Sharing
Institutions in highly regulated domains such as finance and healthcare often
have restrictive rules around data sharing. Federated learning is a distributed
learning framework that enables multi-institutional collaborations on
decentralized data with improved protection for each collaborator's data
privacy. In this paper, we propose a communication-efficient scheme for
decentralized federated learning called ProxyFL, or proxy-based federated
learning. Each participant in ProxyFL maintains two models: a private model
and a publicly shared proxy model designed to protect the participant's
privacy. Proxy models allow efficient information exchange among participants
using the PushSum method, without the need for a centralized server. The proposed
method eliminates a significant limitation of canonical federated learning by
allowing model heterogeneity; each participant can have a private model with
any architecture. Furthermore, our protocol for communication by proxy leads to
stronger privacy guarantees using differential privacy analysis. Experiments on
popular image datasets and a pan-cancer diagnostic problem using over 30,000
high-quality gigapixel histology whole-slide images show that ProxyFL can
outperform existing alternatives with much less communication overhead and
stronger privacy.
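The abstract names PushSum as the decentralized exchange mechanism but gives no implementation details. As an illustrative sketch only (scalar values stand in for proxy-model parameters, and the four-node directed ring and step count are assumptions), PushSum gossip averaging works as follows: each node keeps a value and a weight, repeatedly pushes fractions of both to its out-neighbors, and the value-to-weight ratio converges to the global average at every node:

```python
def push_sum(values, steps=60):
    """PushSum gossip averaging on a directed ring of len(values) nodes.

    Each node i holds a value x[i] and a weight w[i] (initialized to 1).
    At every step it keeps half of (x, w) and pushes half to its successor.
    The ratio x[i] / w[i] converges to the global average for all nodes.
    On this symmetric ring the weights stay uniform; on general directed
    topologies they correct for unbalanced message flow.
    """
    n = len(values)
    x = [float(v) for v in values]
    w = [1.0] * n
    for _ in range(steps):
        # Node i receives the pushed half from its ring predecessor (i - 1).
        x = [0.5 * x[i] + 0.5 * x[i - 1] for i in range(n)]
        w = [0.5 * w[i] + 0.5 * w[i - 1] for i in range(n)]
    return [xi / wi for xi, wi in zip(x, w)]

print(push_sum([1.0, 2.0, 3.0, 4.0]))  # every entry is close to the average 2.5
```

In ProxyFL it is the proxy models, not the private ones, that participate in this exchange, which is what keeps each institution's private model local.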
Off-line vs. On-line Evaluation of Recommender Systems in Small E-commerce
In this paper, we present our work towards comparing on-line and off-line
evaluation metrics in the context of small e-commerce recommender systems.
Recommending for small e-commerce enterprises is rather challenging due to the
low volume of interactions and low user loyalty, which rarely extends beyond a
single session. On the other hand, we usually deal with lower volumes
of objects, which are easier for users to discover through various
browsing/searching GUIs.
The main goal of this paper is to determine the applicability of off-line
evaluation metrics for learning the true usability of recommender systems
(evaluated on-line in A/B testing). In total, 800 variants of recommending
algorithms were evaluated off-line w.r.t. 18 metrics covering rating-based,
ranking-based, novelty, and diversity evaluation. The off-line results were
afterwards compared with the on-line evaluation of 12 selected recommender
variants, and based on the results we tried to learn and utilize an off-line
to on-line results prediction model.
Off-line results showed great variance in performance w.r.t. different
metrics, with the Pareto front covering 68% of the approaches. Furthermore, we
observed that on-line results are considerably affected by the novelty of
users. On-line metrics correlate positively with ranking-based metrics (AUC,
MRR, nDCG) for novice users, while excessively high diversity and novelty had
a negative impact on their on-line results. For users with more visited
items, however, diversity became more important, while the relevance of
ranking-based metrics gradually decreased.
Comment: Submitted to ACM Hypertext 2020 Conference
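The Pareto-front finding above (no single variant dominating across all 18 metrics) can be sketched with a small dominance check. The variant scores below are hypothetical, not taken from the paper, and only two metrics are shown for brevity:

```python
def pareto_front(points):
    """Return the points not dominated by any other point.

    A point dominates another if it is >= on every metric and strictly >
    on at least one (all metrics assumed higher-is-better).
    """
    def dominates(a, b):
        return (all(ai >= bi for ai, bi in zip(a, b))
                and any(ai > bi for ai, bi in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (nDCG, diversity) scores for four recommender variants.
variants = [(0.80, 0.30), (0.75, 0.50), (0.70, 0.40), (0.60, 0.20)]
print(pareto_front(variants))  # → [(0.8, 0.3), (0.75, 0.5)]
```

With many weakly correlated metrics, a large fraction of variants end up mutually non-dominated, which is consistent with the 68% figure the paper reports.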